Visualization system for medical images (machine translation by Google Translate, not legally binding)
Patent abstract:
A system for displaying a medical image, comprising: an extraction unit (10) for extracting a plurality of compatible 2D images from a plurality of input files stored in a storage unit (12); a processing unit (18) for generating 3D models according to established preferences; a 4D display unit (14) for displaying the generated 3D models; and a gestural and tactile control unit (16) for receiving a tactile input or a gesture made by a user and sending an instruction to the processing unit (18) or to the display unit (14). (Machine translation by Google Translate, not legally binding.)

Publication number: ES2649789A1
Application number: ES201630967
Filing date: 2016-07-14
Publication date: 2018-01-15
Inventors: Eduardo MIRPURI MERINO; Álvaro PÉREZ SALA; Aratz SETIÉN GUTIÉRREZ; Jorge Rafael LÓPEZ BENITO; Enara ARTETXE GONZÁLEZ
Applicants: Creativitic Innova SL; Fundación Rioja Salud
IPC main class:
Patent description:
Technical Field of the Invention

The invention pertains to immersive visualization systems. In particular, it is applied to microscopy and medical imaging, and also to the educational, research and industrial fields.

Background of the Invention or State of the Art

Although 3D technologies are increasingly widespread (3D TVs, game consoles, etc.), three-dimensionality always comes from a two-dimensional interface, which limits immersion in the 3D image, as well as realistic handling of the objects shown. Work tables are currently under development, and they are generally based on overhead projectors instead of 3D screens as in the present invention.

Confocal microscopy

Confocal microscopy is a microscopic observation technique whose success is due to the undoubted advantages it offers compared to optical microscopy and, above all, to the possibility of obtaining "optical sections" of the sample, which allows its three-dimensional study. Samples observed with conventional optical microscopy are translucent. The light interacts with the sample at various depths, so the image that reaches the observer presents blurred areas due to light coming from areas outside the focal plane, which degrades the contrast and resolution of the image. The principle of confocal microscopy is to eliminate reflected or fluorescent light coming from out-of-focus planes. To this end, a small area of the sample is illuminated with a laser beam and only the light coming from the focal plane is collected, discarding the beams coming from the lower and upper planes. In this way an image is obtained in which only the in-focus plane appears (without any interference from the planes above or below). The process can be repeated at various planes of the sample, so that the information obtained reveals the three-dimensional shape of the structure under observation.

Applications of confocal microscopy

Confocal microscopy is applied to the study of fluorescently labeled samples. These markings reveal three-dimensional cellular structures that would otherwise be impossible to observe, with 3D reconstruction being one of the outstanding applications of the technique to biological samples.

Limitations of confocal microscopy

The limitations of confocal microscopy can be grouped fundamentally into two. First, the use of a laser and the long exposure time of the sample cause the fluorescence to fade. Second, although the images captured are three-dimensional, their visualization is limited to 2D monitors, so the analysis of the images is done by an observer who sees the samples in 2D. As a means to visualize 3D structures, videos are generated in which the image rotates, giving the researcher an idea of the 3D aspect of the sample. However, today there is no system that allows direct visualization in 4D.

Positron Emission Tomography (PET)

PET, Positron Emission Tomography, is a molecular imaging technique widely used in the healthcare system as a tool for the diagnosis and evaluation of neoplastic processes (cancer) as well as their evolution after the start of treatment. In this technique the image is generated by the detection of photons after the introduction of a radioactive compound (radiopharmaceutical / marker / radionuclide) into the body. This marker, disintegrating due to its instability and short half-life, emits positrons that in turn produce photons, which are captured by a detector.
PET applications

The applications of this technology can be summarized as: contribution to differential diagnosis, identification and pathological characterization, and monitoring of disease and treatment.

PET limitations

Despite its power and technical benefits, this technology has its limitations. The final form of visualization involves obtaining 3D images: the detection of the radiopharmaceutical allows images to be obtained in different planes (axial, sagittal and coronal), and specialized software integrates all these captures to obtain a final 3D image. Despite having 3D images, through whose sections (levels) the clinician can move, what is observed at all times are planar sections (a single plane). Although volume information is obtained, the displacement through the image (on display screens) is in a single plane, so it is complicated and laborious to make an adequate diagnosis and to topographically locate a tumor structure within the body structure itself. The development of the new technology would make it possible to obtain more realistic and accessible images for healthcare professionals, which would contribute to better diagnosis and treatment and, in general, to a clear improvement in clinical practice. In more detail, the advantages identified are:

• It would allow describing and further detailing the image of patients referred to surgery (radioguided surgery), which would contribute to reducing intervention times and reducing the appearance of complications.
• It would offer a clear advantage and improvement when performing biopsies, by determining the uptake region and its body environment more accurately.
• It would improve the assessment of the response of tumors to chemotherapy or radiotherapy treatment.
• The presentation of the images would be more intuitive, fast and accessible; intensive previous experience in molecular image analysis would not be required.
• It would facilitate the training and teaching of professionals with more understandable and intuitive images.

Education sector

There are numerous publishing houses that offer online solutions or software with 3D images, although the final visualization is always done on 2D devices. This proposal is advantageous because it is a tool that can replace more expensive "natural" models, introducing a realistic, economical, innovative and friendly element that facilitates learning. An example of use is in dissection classrooms for the study of anatomy.

3D display systems

Among the previously known hardware systems, the following should be mentioned:

WO2010062117A2 proposes a system to interact with 3D content. The system provides an immersive environment to interact gesturally through user-tracking sensors (using 3D and 2D cameras). The display media can be implemented with several flat or curved monitors; there is an embodiment with two flat screens arranged in approximately perpendicular planes. It proposes touch sensors such as gloves. The present invention, in contrast, does not require the use of electronic glasses or cameras as sensors, reducing the user's eye strain and simplifying gestural control, since a simple device allows remote operation with both hands.

US2004109022A1 proposes the formation of three-dimensional images by means of a processor with an "immersion generator" fed with data on the position and orientation of a user. From non-immersive graphics of an image model, the immersion generator generates a virtual representation of the image model.
The present invention, in contrast, has an automated engine that generates immersive models from standardized 2D graphics, resulting in 3D models made up of M partial, exportable and distributable models, not a virtual representation. Likewise, it should be noted that the present invention allows the use of smaller specific hardware formed by two 3D screens, which can be commercial.

Existing software

Given the current applications of the prototype for both medical images and confocal microscopy, the existing software solutions for each type are presented separately. There are two important standards: one is DICOM and the other is BIOFORMATS. DICOM (Digital Imaging and Communication in Medicine) is the worldwide recognized standard for the exchange of medical tests, designed for their handling, viewing, storage, printing and transmission. DICOM files can be exchanged between two entities that have the capacity to receive images and patient data in DICOM format. DICOM allows the integration of scanners, servers, workstations, printers and network hardware from multiple suppliers within an image storage and communication system. BIOFORMATS is a software tool that allows reading and writing microscope files using a free and standardized format.

Solutions for viewing DICOM images:

• 3D Slicer - Free and open-source DICOM image display software. In addition to including the general characteristics of any medical image viewer, it presents certain aspects relevant to the prototype developed. First, it is capable of displaying flat (non-stereoscopic) 3D reconstructions of patient images. However, these reconstructions are not models themselves, but renderings of the image sets that make up the DICOM series. In no case does it perform an automatic segmentation and reconstruction of 3D models exportable to a standard format. It does, however, allow the reconstruction of models based on manual segmentation by the user, which are exportable using standardized modeling formats.

• OsiriX - Advanced DICOM medical image viewer, one of the most common for use in workstations other than those marketed by manufacturers. Its license is expensive, and there is a free "Lite" version for non-medical uses. The paid version is also certified for use in medical diagnosis. Regarding the functionalities of interest here, it has non-stereoscopic flat 3D rendering of the image series, and semi-automatic generation of exportable 3D models in standard formats. In this case the segmentation process is more automated and one just has to define a series of parameters, which in any case requires knowledge of segmentation techniques.

Solutions for viewing confocal microscopy images:

• Fiji (ImageJ) - Open-source image viewer with functionalities for viewing microscopy images. In the field of 3D reconstruction of images in several planes it is quite limited. It allows creating a 3D reconstruction simulation in the form of video, which is a rendering of the different planes and is not exportable as a model, only in .avi format as video.

• Vaa3D - Open-source software for 3D visualization of microscopy images. It offers a 2D view of the image, creating lateral cuts in addition to the main image. It is capable of rendering a 3D reconstruction of the images from the different planes and generates exportable models.

• Volocity - Viewer for microscopy images, paid by license.
It includes advanced 3D viewing and allows many adjustments to the visualization, despite not generating exportable models. It does, however, allow the generation of exportable multimedia content customizable to suit the user, these recreations being much more advanced than in the previous software packages.

• Imaris - 3D image visualization software; it renders microscopy images with several planes, with advanced options both for generation and for the export of multimedia content from the work done. It does not allow the complete export of the model as such.

Gestural interface

As for motion-capture technologies, we find on the one hand tracking devices (controllers) based on infrared sensors located at the sides of the screens, and on the other, technologies such as Kinect or Leap Motion, the latter being the most attractive given their versatility and constant expansion.

Brief Description of the Invention

It would be desirable to have a system for direct visualization in 4D that resolved the limitations identified in the state of the art, with the ability to change image planes to assist in topographically locating a tumor structure within the body structure itself. The proposed visualization system is immersive and improves accuracy, speed and efficiency in clinical diagnosis in medicine, as well as in biomedical research. It allows access to realistic 3D images for collaborative work and allows gestural manipulation. In addition to the use of real 3D images, we add the temporal component, since these images are dynamic and interact with the user. It is therefore named a 4D display system, improving on conventional 3D solutions.

These advantages are achieved in the proposed system thanks to a series of units. There is an extraction unit for extracting a plurality of compatible 2D images from a plurality of input files stored in a storage unit; a processing unit to generate 3D models based on established preferences; a 4D display unit to visualize the generated 3D models with associated temporal information; and a gestural and tactile control unit to receive a tactile input or a gesture made by a user and send an instruction to the processing unit or the display unit.

Optionally, the extraction unit, in cooperation with the processing unit, checks whether the input files to be extracted are compatible with a specific medical standard. Optionally, the standard is BIOFORMATS or DICOM. Optionally, if the format is BIOFORMATS, the extraction unit extracts a layer associated with a tissue of interest according to the intensity range of the 2D image. Optionally, if the format is DICOM, the extraction unit extracts a layer associated with a tissue of interest according to its value on the Hounsfield scale. Optionally, the processing unit extracts a contour in the 2D image. Optionally, the display unit comprises an upper screen and a lower screen coupled to stereoscopically combine four images associated with the generated 3D model, in two planes corresponding to each screen and with two stereoscopic images per screen. Optionally, for each screen, the associated stereoscopic image pair is generated by rendering with an interaxial distance and a focal distance that are a function of the angle formed between both screens. Optionally, the gestural and touch control unit is coupled with the lower screen. Preferably, the angle between the upper screen and the lower screen is in the range of 90° to 120°. More preferably, the angle is 105°.

Brief description of the figures
Fig. 1: Schematic block diagram of the operation of the invention.
Fig. 2: Layout angles of the lower and upper screens.
Fig. 3: Viewing angles.
Fig. 4: Lower screen angle.
Fig. 5: Final layout of the prototype.
Fig. 6: Information processing flow.
Fig. 7: Ordered extraction of data from files.
Fig. 8: Virtual arrangement of camera artifacts.

Detailed description of the invention

For a better understanding of the invention, different embodiments are described with reference to the above figures; they are to be considered as not limiting the scope of the system object of the invention.

A schematic diagram with functional blocks depicting the main units in which the present embodiment can be structured is illustrated in Fig. 1. As can be seen, a storage unit 12 contains the starting information. An extraction unit 10 is responsible for accessing it and extracting images 20, with which a processing unit 18 performs the tasks necessary to display them in a display unit 14. The tasks carried out by the processing unit 18 generate interactive 3D models in an automated manner. A model must be understood as a collection of points within a three-dimensional space, connected by various geometric entities such as triangles, lines, curved surfaces, etc.; that is, a mathematical representation of any three-dimensional object (either inanimate or living). 3D models can be made by hand, through algorithms, or scanned. The processing unit 18 is coupled with a gestural and tactile control unit 16 which allows the 3D images shown in a 4D viewer to be manipulated.

The present embodiment takes the storage unit 12 as the input point. The extraction unit 10 scans the files stored in a given location, showing the detected images that are compatible. By compatible we understand all those that comply with the DICOM protocol or those from standardized microscopy devices, such as BIOFORMATS. With the selection of the input files to the system begins an automatic process, optimized in terms of performance, which analyzes the images contained and divides them into different temporary images (only the first time) depending on the internal structure of the system. It is necessary to differentiate the characteristics of the analysis performed according to the type of input image:

BIOFORMATS: It is usual to find large microscopy images that contain in a single file S different study series, each with C channels and in turn with K 2D images along the Z axis. Pre-processing in this case consists of the separation into S temporary files in TIFF format, each of them with the C × K images mentioned previously.

DICOM: All files of this type present are classified according to a criterion that separates them based on concepts such as patient, study, series within the study, and spatial arrangement. In this way the system is capable of generating S temporary files of K images each, perfectly arranged spatially.

For the generation of 3D models, two fundamental differences are proposed depending on the type of image to be processed.

BIOFORMATS: The system enables the selection of the number of layers to be extracted (the greater the number, the greater the detail) and allows the determination of the intensity threshold values for them. These values will be taken as a starting point for the generation of surfaces. The total number of partial models M generated is: M = C (contained channels) × N (number of layers to extract).
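By way of illustration only (this helper is ours, not part of the patent), dividing an intensity range into N equal sections to seed the surface extraction could look as follows in Python:

```python
# Hypothetical sketch of the equal-section threshold selection described
# above; the function name and return convention are our assumptions.
def layer_thresholds(i_min: float, i_max: float, n_layers: int) -> list:
    """Split [i_min, i_max] into n_layers equal sections and return one
    threshold per layer, used as a seed value for surface generation."""
    step = (i_max - i_min) / n_layers
    return [i_min + k * step for k in range(1, n_layers + 1)]

# Example: an 8-bit channel split into 4 layers.
print(layer_thresholds(0, 255, 4))  # [63.75, 127.5, 191.25, 255.0]
```

With C channels, each threshold yields one partial model per channel, matching M = C × N.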
DICOM: The system allows the selection of the tissues of interest and the particularization of the HU thresholds that characterize a specific tissue, by modifying its preferences. The total number of partial models M generated is: M = N (number of selected tissues).

Regardless of the type of image processed, the selection of the compression level of the resulting model, as well as of determining values in the extraction algorithm, is facilitated. Once this parallel process is finished, a 4D display unit 14 dynamically reconstructs the total 3D model as a sum of all these parts, adapting the geometry. To achieve immersive reality it is only necessary to have passive 3D glasses, easily acquired and of low cost.

A gestural and tactile control unit 16 facilitates the dynamic control of the generated model, allowing its modification on the fly; given the processing technique developed (explained later in the software development section), there is no need to regenerate it to change the level of detail. Likewise, the gestural and tactile control unit 16 allows rotation, translation and scaling of the model, and the point of view can be modified as needed. This allows many fields of use. The generated models become part of the history and can be reloaded as needed, without having to generate them again. Likewise, relocation of the work environment is possible, with the ability to import/export from/to external devices in a standardized format.

Hardware development

The prototype developed includes two dual screens 50a, 50b arranged in book form with respect to their horizontal, forming a 105° angle between them as shown in Fig. 2 (or also Fig. 5). The present embodiment makes the lower screen 50b a touch screen to serve as a worktable, and integrates the LeapMotion® and Myo® gesture recognition systems. We proceed to detail the process followed for the preparation of the device, the technical characteristics that were expected of it, and how each one has been resolved.

Display

The display unit 14 allows immersive vision based on stereoscopic 3D viewing technology. Specifically, it is based on the FPR (Film Patterned Retarder) vision technology promoted by LG® on its televisions under the registered trademark Cinema3D. This type of technology allows viewing 3D images using an optical principle of light polarization, which makes it possible to observe them using lightweight glasses with a specific polarization. Each polarized lens blocks one of the two images emitted by the display unit 14, causing each eye to receive the image corresponding to its vision plane. The reason for choosing this technology is that no electronic systems are needed in the glasses used to observe the images, and that it allows fluid vision without eye strain. Alternative 3D technologies base their operation on the use of oscillating images: each frame is temporally divided to show two images corresponding to the two planes of vision, oscillating between them in a way imperceptible to the human eye. This oscillation process is synchronized with an electronic device coupled to the viewing glasses that darkens the glass opposite the current vision plane, so that each eye receives only its images. However, this generates fatigue caused by the continuous image oscillation which, despite being imperceptible, is continuous and over long periods harmful to the eyes.
In addition, they require the use of heavier and more uncomfortable electronic glasses.

The prototype can be built with two FPR screens 50a, 50b, LG® model 49UB850V, with 16:9 aspect ratio and 49" diagonal. Development at this size does not condition other screen sizes: it can be scaled to smaller or larger sizes, as long as the base technology does not limit it. Given the manufacturer's specifications for optimal image visualization, the assembly is studied to ensure that the user is within the recommended viewing range. The condition for this is that the viewing angle at each of the horizontals of the screen [θ0 ... θn], as indicated in Fig. 3, is never less than 75°. The assembly of the upper screen 50a is not complicated, since only the height adjustment of the screen and the user is required. In the case of the lower screen 50b, shown in Fig. 4, a system was devised by which one works with good vision in another range of values for the angles [β0 ... βn]. For this purpose, the evolution of the visualization was tracked over a wide range of incidence angles, from the horizontal 90° down to an angle of 30°. From this experience it is concluded that the range [90°, 75°] provides correct polarization without shadows (as the manufacturer indicates); beyond 75° the progressive coupling of both images is observed. However, it is discovered that beyond 60° and down to 30° of inclination the images are decoupled again (as schematically indicated in Fig. 4), allowing the separation of the images by polarized lenses, with the disadvantage that an inversion occurs: the image that should reach the right eye reaches the left and vice versa. To solve this, the bottom screen 50b is mounted flipped horizontally, while maintaining the orientation of the image. In this way, the range [90°, 75°] stops working correctly and becomes inverted, with the range [60°, 30°] being the one that works correctly in this case. As shown in Fig. 2, placing the screens at an angle of 105°, the upper screen 50a can be seen from the extremes θ0 = 75° to θ1 = 75°, and the lower screen 50b from β0 = 30° to β1 = 45° (fully complying with the vision range devised for the lower screen). It is thus concluded that the user has an optimal 3D field of vision on both screens; a small numeric sanity check of these ranges follows.
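Purely as an illustration of the measured polarization windows described above (the function and its encoding of the ranges are ours, not part of the patent):

```python
# Illustrative check of the polarization windows measured for the
# prototype: the upper screen polarizes correctly for incidence angles in
# [75 deg, 90 deg]; the flipped lower screen works in [30 deg, 60 deg].
def polarization_ok(angle_deg: float, screen: str) -> bool:
    if screen == "upper":
        return 75.0 <= angle_deg <= 90.0
    if screen == "lower":  # mounted flipped horizontally
        return 30.0 <= angle_deg <= 60.0
    raise ValueError("unknown screen: " + screen)

# With the screens at 105 degrees, the extremes reported for Fig. 2:
assert polarization_ok(75, "upper")                        # theta0 = theta1 = 75
assert all(polarization_ok(a, "lower") for a in (30, 45))  # beta0 = 30, beta1 = 45
```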
Physical assembly of the vision system

To arrange the screens in their correct position, a customized clamping structure is made for placing the system on a conventional table. The final arrangement of the prototype is shown in Fig. 5, with the screens 50a, 50b integrated thanks to a clamping structure 51 with the parameters specified above. In the assembly, the integral clamping structure 51 is distinguished, along with a series of plastic joints screwed between the structure 51 and the screens 50a, 50b themselves.

Integration of control devices

The embodiment includes a gestural and touch control unit 16 that integrates the LeapMotion® gesture control technology. In addition to gestural control, it is specified that the lower screen 50b be multitouch to allow its use as an interactive worktable. To meet this need, a study of existing technologies was made for tactile input on a large screen. Tactile input by infrared frames was found to be convenient. This technology consists of superimposing an outer frame above the surface of the screen that emits a mesh of infrared signals; when interrupted by the fingers, these signals reveal the exact points where the screen is being touched. This touch technology also ensures that there is no electromagnetic interference with the 3D FPR vision panels. The component used can be a "PQLabs G5S Overlay" touch frame, which incorporates tempered glass to protect the panel from shocks and pressure. This frame is integrated into the lower screen by means of a series of adhesives on the edges of the touch device, thus being fixed to the lower panel and creating a multitouch 3D screen.

Computer equipment for device control

A computer responsible for managing the hardware and processing the images is included as the processing unit 18. For optimal performance in image processing and viewing, a machine with a minimum of power is necessary; for the prototype developed, a PC with the following characteristics was used:

• Intel i7-5930K processor, 6 cores, 12 threads at 3.5 GHz
• 32 GB Fury DDR4 2133 RAM
• 256 GB M.2 SATA SSD hard drive
• Gigabyte GA-X99-G5 motherboard
• 2× NVIDIA TITAN X 12 GB GDDR5 graphics cards

The present embodiment supports any configuration with HDMI output for two screens. The choice of this hardware is driven by the demands of the medical image viewing software that has been developed specifically for this system and is detailed below.

Software development

The software is responsible, on the one hand, for the preconditioning of the input images in the extraction unit 10 and the generation and extraction of the associated models in the processing unit 18 and, on the other, for the loading and visualization in the display unit 14 of the 3D models obtained in the previous process. This entire process is carried out in an automated way (always under the user's specifications) and does not require interaction with the user except to advance through the stages of the process, thus preventing the user from needing technical knowledge about segmentation software and model extraction. For the generation of the 3D models associated with an input series chosen by the user, 3 main stages are followed. Fig. 6 helps to follow the process described below.

1. ANALYSIS

The 3D reconstruction process begins when the user, through the viewer control interface 601, selects in the storage unit 12 an input directory 611. The extraction unit 10 initiates the "scan" operation 621 that recursively scans each of the possible paths, detecting all images compatible with the system. By images compatible with the system we understand, on the one hand, any group of images that comply with the DICOM protocol and, on the other, all those that meet the OpenMicroscopy metadata standard. The main task of this phase is to detect the microscopy images and to group the series of DICOM images that contain a 3D volume; those that do not contain a volume are discarded. For the compatibility checks of the formats mentioned above, we rely on two basic libraries: GDCM for reading DCM files and BIOFORMATS for those of the OpenMicroscopy standard. The extraction unit 10, in cooperation with the processing unit 18, launches several threads (depending on the hardware processor) that scan all possible routes in the input directory 611 selected in the storage unit 12. These service units are responsible for checking the compatibility of each of the files, as sketched below.
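A minimal sketch of such a compatibility scan, assuming Python with pydicom in place of the GDCM/BIOFORMATS libraries actually used, and with an illustrative (not exhaustive) extension list for the BIOWorker-style check described next:

```python
# Hypothetical re-creation of the recursive "scan" operation (621).
# The patent uses GDCM and the BIOFORMATS Java library; pydicom and the
# extension list below are our stand-ins for illustration.
from pathlib import Path
from pydicom import dcmread

BIO_EXTENSIONS = {".lsm", ".czi", ".nd2", ".oib", ".lif", ".tif"}  # illustrative

def scan(input_dir: str) -> dict:
    found = {"dicom": [], "bioformats": []}
    for path in Path(input_dir).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix.lower() in BIO_EXTENSIONS:
            found["bioformats"].append(path)  # extension-based validity check
            continue
        try:
            dcmread(path, stop_before_pixels=True)  # parse the header only
        except Exception:
            continue  # not a readable DICOM file: ignore
        found["dicom"].append(path)
    return found
```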
BIOWorker: This service unit is responsible for managing the BIOFORMATS files, verifying the validity of the extension.

DICOMWorker: In the case of DCM files, the associated service unit performs a series of more complex tasks due to the characteristics of this type of file. This unit is responsible for:

1. Grouping: all the DCMs that belong to the same patient must be grouped.
2. Separation: the DCM groups that belong to the same study/series must be separated, identifying the number of them contained in the entire input series.
3. Ordering: each of these groups must be correctly ordered so that the 3D volume recomposition is accurate.

Fig. 7 illustrates the ordered extraction of data from the files. The characterization criterion depends on the labels defined by the DICOM protocol for this purpose, namely: Patient ID 701, Patient Name, Study UID 702, Series UID 703, Series Description, Reference Frame UID, Image Orientation (Patient) (IOP) 704, Image Position (Patient) (IPP), Acquisition Mode, Acquisition Number and Image Number. To do this, an iterative process is followed that is also responsible for checking a series of "special" cases. This process, represented in Fig. 7, starts by checking whether the patient ID 701 is found in the files, then the study ID 702 and the series ID 703, in this order. After that, it checks whether the image set contains the IOP tag 704. If it contains this tag, it organizes the images according to it 706 and checks whether the IPP tag is present 708; if so, it orders according to it 710. If there is no IOP, it sorts by image number 705 and checks whether there are duplicates 707; if they exist, it tries to eliminate them 709. Once the process is complete, the last step is to check whether the ordering is considered correct 711. If so, the ordering is accepted 712; otherwise the series is discarded 713. Finally, it checks whether there are more series 714; if so, the process restarts, otherwise it finishes 715. The DCMs are thus grouped iteratively for each patient, each study and each series, and within each of them they are grouped according to the reference image in cases where this can be done. Where it cannot, an index-based ordering contained in each DCM is chosen, paying special attention to the possibility of encountering repeated DCMs. This whole process results in an XML file 612 that contains the information relevant to the system, including metadata such as the spatial separation between consecutive images, pixel spacing, etc. All compatible files are indexed using an alphanumeric key, which serves as a reference for the entire process.
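A sketch of the grouping and ordering criterion of Fig. 7, again assuming pydicom datasets (the helper names are ours): slices are grouped by patient, study and series, ordered by the IPP position projected on the slice normal derived from IOP, with the image number as a fallback, and duplicates dropped:

```python
# Illustrative implementation of the Fig. 7 criterion, not the patent's code.
from collections import defaultdict
import numpy as np

def group_series(datasets):
    """Group slices by patient 701, study 702 and series 703."""
    groups = defaultdict(list)
    for ds in datasets:
        groups[(ds.PatientID, ds.StudyInstanceUID, ds.SeriesInstanceUID)].append(ds)
    return groups

def order_slices(series):
    """Order by IOP/IPP (704..710) when present, else by image number 705."""
    first = series[0]
    if hasattr(first, "ImageOrientationPatient") and hasattr(first, "ImagePositionPatient"):
        iop = np.asarray(first.ImageOrientationPatient, dtype=float)
        normal = np.cross(iop[:3], iop[3:])  # slice-stacking direction
        series.sort(key=lambda ds: float(
            np.dot(normal, np.asarray(ds.ImagePositionPatient, dtype=float))))
    else:
        series.sort(key=lambda ds: int(ds.InstanceNumber))
    # Steps 707/709: drop duplicates sharing the same spatial position.
    ordered, seen = [], set()
    for ds in series:
        pos = tuple(float(v) for v in getattr(ds, "ImagePositionPatient", []))
        key = pos if pos else int(ds.InstanceNumber)
        if key not in seen:
            seen.add(key)
            ordered.append(ds)
    return ordered
```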
2. PROCESSING

Returning to Fig. 6, it can be seen that the relevant information is presented to the user 602 to serve as an entry point to the image to be processed by the processing unit 18; once the user selects the image to be treated, the processing of the desired image 622 is launched. This phase focuses on the analysis of the selected image and its internal structure. Due to the conceptual differences between the two types of images that the present embodiment is capable of treating, the behavior is separated into two cases:

BIOFORMATS: It is usual to find microscopy images that contain N series of K images each; the processing unit 18 is responsible for analyzing the characteristics of the image in question and, through a parallel process, obtaining N temporary images in TIFF format 613a, each with K images, grouped in a directory 613. To do this, and through the JVM (Java Virtual Machine), we communicate with the package developed for this purpose. This package is responsible for handling BIO files based on the Java BIOFORMATS library. In parallel, it splits the input into the different target files that will contain each of the series of which the original image is composed. Each resulting TIFF file 613a will contain all the Z planes associated with that particular series, for each of its C channels.

DICOM: In the case of DCM files, the processing proceeds as follows: the structure of the DCM series is read from the validPaths.xml file 612 and grouped into a single output TIFF file 613a (for each series), since they are already organized.

At the end of the processing of each of the types of images the result is similar: we obtain an encoded output directory 613 that contains all the temporary images 613a and a splitter.xml file 614 that contains a description of each of those images and their characteristics.

3. GENERATION OF 3D MODELS

Once the processing of the input images 622 is finished and the temporary images have been generated, the generation of the 3D model can start. The user then selects the series that he wishes to generate 603 and the extraction process 623 begins. This process differs somewhat depending on the type of image to be treated.

BIOFORMATS: Starting from the temporary input image 613a, the software studies the range of intensities available within it and, depending on the user's preferences (number of layers to be extracted) or, failing that, the value set in the settings file, divides the range of intensities into equal sections. These values are the ones used by the surface extraction algorithm to delimit the values from which each surface should be created. For each input image we get a total of: M models = C (channels) × S (surfaces to be extracted). Once the number of surfaces to be extracted and their associated values have been decided, they are distributed among the queues feeding the different extractors, which are processed iteratively until the desired M partial models are obtained. Some of the surfaces may ultimately not be generated, since null values may arise during the process.

DICOM: In this case, what the extraction unit 10 seeks is the extraction of K layers associated with each of the tissues of interest (bone, muscle tissue, fatty tissue, etc.). For this, the values established on the Hounsfield scale are taken as the starting point. Depending on the preferences that the user has selected, the software takes these values as input and determines the number of models to be extracted, taking the default HU (Hounsfield Units) values (if any) as input for the surface extraction algorithm. Each of these models is stored in the folder corresponding to its series 616 in the storage unit 12, in a subfolder with the name associated with the tissue.

For the generation of each model and its subsequent storage in .obj files 616a, the extraction algorithm follows a process common to both types of images:

• Contour extraction (depending on the minimum and maximum value)
• Elimination of duplicate points
• Reduction of the number of polygons
• Normal calculation
• Triangle generation
• Smoothing
• Storage

The entire algorithm is implemented based on the VTK library. In the root directory of each of the series 616 that we have extracted (regardless of whether it comes from a DCM or BIO file), an associated XML file, model.xml 617, is also generated.
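A minimal sketch of this seven-step pipeline using VTK's Python bindings (the patent only states that VTK is used, so the specific filters, their order and all parameter values here are our assumptions):

```python
# Hypothetical VTK sketch of the extraction pipeline; filter choices and
# parameters are our assumptions, not the patent's implementation.
import vtk

def extract_surface(image, lower, upper, out_path, reduction=0.5):
    # 1. Contour extraction between a minimum and a maximum value:
    #    keep the band of interest, then take an isosurface of the mask.
    thresh = vtk.vtkImageThreshold()
    thresh.SetInputData(image)  # a vtk.vtkImageData volume
    thresh.ThresholdBetween(lower, upper)
    thresh.SetInValue(1.0)
    thresh.SetOutValue(0.0)

    contour = vtk.vtkMarchingCubes()
    contour.SetInputConnection(thresh.GetOutputPort())
    contour.SetValue(0, 0.5)

    # 2. Elimination of duplicate points.
    clean = vtk.vtkCleanPolyData()
    clean.SetInputConnection(contour.GetOutputPort())

    # 3. Reduction of the number of polygons.
    decimate = vtk.vtkDecimatePro()
    decimate.SetInputConnection(clean.GetOutputPort())
    decimate.SetTargetReduction(reduction)  # e.g. drop ~50% of the triangles

    # 4./5. Triangle generation and normal calculation.
    triangles = vtk.vtkTriangleFilter()
    triangles.SetInputConnection(decimate.GetOutputPort())
    normals = vtk.vtkPolyDataNormals()
    normals.SetInputConnection(triangles.GetOutputPort())

    # 6. Smoothing.
    smooth = vtk.vtkSmoothPolyDataFilter()
    smooth.SetInputConnection(normals.GetOutputPort())
    smooth.SetNumberOfIterations(20)

    # 7. Storage as .obj (file 616a).
    writer = vtk.vtkOBJWriter()
    writer.SetInputConnection(smooth.GetOutputPort())
    writer.SetFileName(out_path)
    writer.Write()
```

For DICOM input, lower/upper would come from the Hounsfield range of the selected tissue; for example, roughly +300 to +1900 HU is a commonly cited band for bone (an illustrative value from the literature, not taken from the patent).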
This model.xml file 617 contains information regarding the number of partial models M that have been generated, as well as information related to their geometry and characteristics that must be taken into account when composing the total geometry.

4. DISPLAY

Finally, visualization 604 takes place in the part of the software developed for this purpose on Unity3D (a graphics development engine), which runs on the processing unit 18. This software is the "frontend" with which the user interacts. Once all the model files 616a are available, the 4D display unit 14 loads all the information from the model.xml file 617 and analyzes all the data available for loading the image. It then calculates the number of individual files to be reconstructed, and creates a pool of threads of varying size, according to the specification of the processing unit 18 on which it is executed. With all this information, it launches into the execution pool as many jobs as files to be read, each of them recreating in memory the geometry stored in the different .obj files 616a. This concurrency allows a much more efficient load, since most of the time there are a large number of files of moderate size, and rarely a few large files. At the end of all the jobs in the execution pool, the geometric reconstruction (polygon meshes) is accessible in memory. At this moment an algorithm is executed that groups each of the meshes according to the information read in the model.xml file 617, also calculating the limits, the center and the position of the complete reconstructed model and the internal position of each of the polygon meshes. Once this process is finished, we have a hierarchical structure of objects, starting from a root node that composes the complete model and contains each and every part separately. Once the physical reconstruction of the object has been carried out, a new process is launched to assign to each of its parts the appropriate display properties (color, intensity, transparency, texture). These properties are also defined in the model information file 617 and are read at the beginning.
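The frontend itself is built on Unity3D; purely to illustrate the concurrent loading strategy just described (one job per .obj file feeding a shared in-memory structure), a Python sketch with an assumed file layout:

```python
# Illustration only: the actual frontend runs on Unity3D. This sketch
# mirrors the one-worker-per-.obj strategy; the directory layout and the
# model.xml parsing are our assumptions.
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def load_obj(path):
    """Recreate in memory the geometry stored in one .obj file 616a."""
    vertices, faces = [], []
    for line in path.read_text().splitlines():
        if line.startswith("v "):
            vertices.append(tuple(float(x) for x in line.split()[1:4]))
        elif line.startswith("f "):
            faces.append(tuple(int(t.split("/")[0]) for t in line.split()[1:]))
    return {"name": path.stem, "vertices": vertices, "faces": faces}

def load_series(series_dir, workers=8):
    series_dir = Path(series_dir)
    meta = ET.parse(series_dir / "model.xml")  # file 617: expected M models, etc.
    obj_files = sorted(series_dir.rglob("*.obj"))
    # Many moderate-size files: one job per file keeps the pool busy.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return meta, list(pool.map(load_obj, obj_files))
```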
Once all the loading processes are finished, the model is ready to visualize, so it is placed in a virtual scene of the graphics engine, which is responsible for generating the images that will finally be displayed. In the Unity3D engine, this scene is drawn using camera artifacts 803 of the graphics engine. These artifacts are internal components that generate a rendering (an image of the virtual scene, captured as if by a video camera in real life) from a defined point; they are aimed at the part of the scene of interest. In our case the scene must not only be drawn in the normal way: the drawing of the two planes of vision (screens 50a and 50b) must be combined, and the result must also be stereoscopic. To achieve a stereoscopic rendering, two images are required from two points horizontally separated by a distance that we will call the interaxial distance 802. These images are generated by two camera artifacts 803 arranged as just described (see Fig. 8) and focused on a distant common point called the focus. The distance between each of the artifacts and the focus is called the focal length 801 and is the same for both artifacts. We will hereafter call this system of camera artifacts a "3D camera artifact". Each of the vision planes needs to be rendered stereoscopically, so the images corresponding to each of them are generated by a 3D camera artifact, that is, by two camera artifacts.

The vision planes correspond to each of the screens (50a and 50b), which, as mentioned before, form an angle of 105° between them, making the normals of the corresponding vision planes intersect at an angle of 75°. Therefore, both planes of vision are observed from the same point, with an angle of 75° between them. To render, then, two 3D camera artifacts are used, located at the same origin point and aimed in two directions separated by 75° from each other. We will call this set the "final camera system". The final camera system changes the behavior of the camera artifacts it contains, capturing the image of each of them and combining them into a single output image. This image is arranged so that the display on the screens is the desired one: the upper part of the output image carries the two stereoscopic images corresponding to the upper screen's vision plane, while the lower part does so for the lower screen. Finally, the viewer completes its functionalities with the gestural and tactile control unit 16 for the management of the control devices specified in the hardware section. All interaction with the user is done through the touch screen 50b and gestural recognition.
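To make the geometry concrete, a sketch of the final camera system: two 3D camera artifacts sharing one origin and aimed 75° apart, each holding a left/right pair separated by the interaxial distance 802 and converging on a focus at the focal length 801. The symmetric ±37.5° split and all numeric values are our assumptions, not taken from the patent:

```python
# Geometric sketch of the "final camera system" described above; for
# illustration only, the actual rig lives inside the Unity3D engine.
import math

def stereo_pair(origin, yaw_deg, interaxial, focal_length):
    """Return ((left_pos, right_pos), focus) for one 3D camera artifact."""
    yaw = math.radians(yaw_deg)
    forward = (math.sin(yaw), 0.0, math.cos(yaw))  # viewing direction
    right = (math.cos(yaw), 0.0, -math.sin(yaw))   # horizontal offset axis
    half = interaxial / 2.0
    left_pos = tuple(o - half * r for o, r in zip(origin, right))
    right_pos = tuple(o + half * r for o, r in zip(origin, right))
    focus = tuple(o + focal_length * f for o, f in zip(origin, forward))
    return (left_pos, right_pos), focus

origin = (0.0, 0.0, 0.0)
upper_rig = stereo_pair(origin, +37.5, interaxial=0.065, focal_length=1.2)
lower_rig = stereo_pair(origin, -37.5, interaxial=0.065, focal_length=1.2)
```

Each call yields the two render positions for one vision plane; the four resulting images are then packed into the single output image described above.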
Claims:
Claims

[1] 1. System for visualizing a medical image, characterized in that it comprises:
- an extraction unit (10) configured to extract a plurality of compatible 2D images from a plurality of input files stored in a storage unit (12);
- a processing unit (18) configured to generate 3D models based on established preferences;
- a 4D display unit (14) configured to display the generated 3D models with associated temporal information;
- a gestural and touch control unit (16) configured to receive a touch input or a gesture made by a user and send an instruction to the processing unit (18) or the display unit (14).

[2] 2. System according to claim 1, characterized in that the extraction unit (10), in cooperation with the processing unit (18), is configured to check whether the input files to be extracted are compatible with a certain medical standard.

[3] 3. System according to claim 2, characterized in that the standard is BIOFORMATS or DICOM.

[4] 4. System according to claim 3, characterized in that, if the format is BIOFORMATS, the extraction unit (10) is configured to extract a layer associated with a tissue of interest according to the intensity range of the 2D image.

[5] 5. System according to claim 3, characterized in that, if the format is DICOM, the extraction unit (10) is configured to extract a layer associated with a tissue of interest according to its value on the Hounsfield scale.

[6] 6. System according to any one of the preceding claims, characterized in that the processing unit (18) is configured to extract a contour in the 2D image.

[7] 7. System according to any one of the preceding claims, characterized in that the display unit comprises an upper screen (50a) and a lower screen (50b) coupled to stereoscopically combine four images associated with the generated 3D model, in two planes corresponding to each screen and with two stereoscopic images per screen.

[8] 8. System according to claim 7, characterized in that, for each screen (50a, 50b), the associated pair of stereoscopic images is generated by rendering at an interaxial distance (802) and a focal distance (801) according to the angle formed between both screens (50a, 50b).

[9] 9. System according to any one of the preceding claims, characterized in that the gestural and tactile control unit (16) is coupled with the lower screen (50b).

[10] 10. System according to claim 7, characterized in that the angle between the upper screen (50a) and the lower screen (50b) is in the range of 90° to 120°.

[11] 11. System according to claim 10, characterized in that the angle is 105°.
Patent family:
Publication number | Publication date
ES2649789B1 | 2019-01-04
Cited references:
Publication number | Filing date | Publication date | Applicant | Title
EP2840478A1 | 2013-08-23 | 2015-02-25 | Samsung Medison Co., Ltd. | Method and apparatus for providing user interface for medical diagnostic apparatus
WO2016047173A1 | 2014-09-24 | 2016-03-31 | Olympus Corporation | Medical system
WO2016090336A1 | 2014-12-05 | 2016-06-09 | Camplex, Inc. | Surgical visualization systems and displays
Legal status:
2019-01-04 | FG2A | Definitive protection | Ref document number: 2649789; Country: ES; Kind code: B1; Effective date: 2019-01-04
2021-12-03 | FD2A | Announcement of lapse in Spain | Effective date: 2021-12-03
Priority:
Application number: ES201630967A (granted as ES2649789B1)
Filing date: 2016-07-14
Title: MEDICAL IMAGE DISPLAY SYSTEM